31 research outputs found

    Multi-task Self-Supervised Learning for Human Activity Detection

    Deep learning methods are successfully used in applications pertaining to ubiquitous computing, health, and well-being. In particular, the area of human activity recognition (HAR) has been largely transformed by convolutional and recurrent neural networks, thanks to their ability to learn semantic representations from raw input. However, extracting generalizable features requires massive amounts of well-curated data, which are notoriously hard to obtain due to privacy issues and annotation costs. Unsupervised representation learning is therefore of prime importance for leveraging the vast amount of unlabeled data produced by smart devices. In this work, we propose a novel self-supervised technique for feature learning from sensory data that does not require access to any form of semantic label. We train a multi-task temporal convolutional network to recognize transformations applied to an input signal. By exploiting these transformations, we demonstrate that simple auxiliary binary classification tasks provide a strong supervisory signal for extracting features useful for the downstream task. We extensively evaluate the proposed approach on several publicly available datasets for smartphone-based HAR in unsupervised, semi-supervised, and transfer learning settings. Our method achieves performance superior to or comparable with fully-supervised networks, and it performs significantly better than autoencoders. Notably, in the semi-supervised case, the self-supervised features substantially boost the detection rate, attaining a kappa score of 0.7-0.8 with only 10 labeled examples per class. We obtain similarly strong performance even when the features are transferred from a different data source. While this paper focuses on HAR as the application domain, the proposed technique is general and could be applied to a wide variety of problems in other areas.
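    The core idea — turning signal transformations into binary self-supervision tasks — can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the transformation set, parameters, and function names are assumptions.

```python
# Illustrative sketch of transformation-based self-supervision for sensor
# signals. Each transformation yields one auxiliary binary task:
# "was this transformation applied or not?"
import numpy as np

def add_noise(x, sigma=0.05):
    return x + np.random.normal(0.0, sigma, x.shape)

def scale(x, factor=1.5):
    return x * factor

def negate(x):
    return -x

def time_flip(x):
    return x[::-1]

def make_task_dataset(signals, transform):
    """Build one binary task: label 0 = original signal, 1 = transformed."""
    originals = [(s, 0) for s in signals]
    transformed = [(transform(s), 1) for s in signals]
    X = np.stack([d[0] for d in originals + transformed])
    y = np.array([d[1] for d in originals + transformed])
    return X, y

# One task per transformation; a shared temporal conv net would be trained
# jointly on all such tasks in the multi-task setup.
signals = [np.sin(np.linspace(0, 4 * np.pi, 128)) for _ in range(8)]
X, y = make_task_dataset(signals, negate)
```

    A downstream classifier would then reuse the shared network's features rather than the binary task heads.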

    Distributed Fault Detection in Smart Spaces Based on Trust Management

    Application performance in a smart space is affected by faulty behaviour of nodes and communication networks. Detecting faults helps diagnose problems, and maintenance can then restore performance, for example by replacing or reconfiguring faulty parts. Fault detection methods in the literature are too complex for typical low-resource devices, and they do not perform well in detecting intermittent faults. We propose a fully distributed fault detection method that relies on evaluating statements about the trustworthiness of aggregated data from neighbors. Given one or more trust statements describing a fault-free state, the trustor node determines for each observation coming from the trustee whether or not it is an outlier. Several fault types can be explored using different trust statements whose parameters are assessed differently. The trustor subsequently captures the observation history of the trustee node in only two evidence variables, using evidence update rules that give more weight to recent observations. The proposed method detects not only permanent faults but also intermittent faults, with high accuracy and a low false alarm rate.
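    A minimal sketch of the two-evidence-variable idea, assuming a simple exponential discount — the decay factor, trust formula, and class names here are illustrative, not the paper's exact update rules.

```python
# Recency-weighted evidence counters for trust-based fault detection.
# Two variables summarize the whole observation history of a trustee.

class TrustEvidence:
    def __init__(self, decay=0.9):
        self.decay = decay   # weight applied to older evidence
        self.pos = 0.0       # evidence the trustee satisfies the trust statement
        self.neg = 0.0       # evidence it does not (outlier observations)

    def update(self, is_outlier):
        # Discount old evidence so recent observations weigh more,
        # which helps expose intermittent faults.
        self.pos *= self.decay
        self.neg *= self.decay
        if is_outlier:
            self.neg += 1.0
        else:
            self.pos += 1.0

    def trust(self):
        total = self.pos + self.neg
        return self.pos / total if total > 0 else 0.5

ev = TrustEvidence()
for outlier in [False] * 20 + [True] * 3:   # an intermittent fault appears
    ev.update(outlier)
# the trust value drops quickly after the recent outliers
```

    Because old evidence decays, a short burst of outliers noticeably lowers trust even after a long fault-free history, which is what makes intermittent faults detectable.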

    Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence

    Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed scalogram-signal correspondence learning, based on the wavelet transform, to learn useful representations from unlabeled sensor inputs, such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel state information. Our auxiliary task requires a deep temporal neural network to determine whether a given pair of a signal and its complementary viewpoint (i.e., a scalogram generated with a wavelet transform) align with each other, by optimizing a contrastive objective. We extensively assess the quality of the features learned with our multi-view strategy on diverse public datasets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and show their usefulness in the low-data regime, transfer learning, and cross-validation. Our methodology achieves competitive performance with fully-supervised networks, and it outperforms pre-training with autoencoders in both central and federated contexts. Notably, it improves generalization in a semi-supervised setting, as leveraging self-supervised learning reduces the volume of labeled data required. Comment: Accepted for publication in the IEEE Internet of Things Journal.
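    The contrastive auxiliary task boils down to constructing aligned and misaligned signal/scalogram pairs. The sketch below is an assumption-laden illustration: the pairing scheme is simplified, and the wavelet scalogram is stubbed with a toy stand-in rather than a real transform.

```python
# Aligned / misaligned pair construction for the contrastive auxiliary task.
import numpy as np

def toy_scalogram(signal, n_scales=4):
    # Stand-in for a wavelet scalogram: |signal| stretched over scales.
    return np.stack([np.abs(signal) * (s + 1) for s in range(n_scales)])

def make_pairs(signals, rng):
    pairs, labels = [], []
    n = len(signals)
    for i, s in enumerate(signals):
        # positive pair: a signal with its own scalogram
        pairs.append((s, toy_scalogram(s)))
        labels.append(1)
        # negative pair: a signal with another signal's scalogram
        j = (i + 1 + rng.integers(n - 1)) % n   # guaranteed j != i
        pairs.append((s, toy_scalogram(signals[j])))
        labels.append(0)
    return pairs, np.array(labels)

rng = np.random.default_rng(0)
signals = [np.sin(np.linspace(0, 2 * np.pi * (k + 1), 64)) for k in range(6)]
pairs, labels = make_pairs(signals, rng)
```

    A deep temporal network with two branches (signal and scalogram) would then be trained to classify these pairs, optimizing a contrastive objective.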

    Power-managed smart lighting using a semantic interoperability architecture


    Runtime evaluation of cognitive systems for non-deterministic multiple output classification problems

    Cognitive applications that involve complex decision making, such as smart lighting, have non-deterministic input-output relationships, i.e., more than one output may be acceptable for a given input. We refer to them as non-deterministic multiple output classification (nDMOC) problems, for which it is particularly difficult for machine learning (ML) algorithms to predict outcomes accurately. Evaluating ML algorithms with commonly used metrics such as Classification Accuracy (CA) is therefore not appropriate. In a batch setting, Relevance Score (RS) was proposed as a better alternative, which determines how relevant a predicted output is to a given context. We introduce two variants of RS to evaluate ML algorithms in an online setting. Furthermore, we evaluate the algorithms using different metrics on two datasets that have non-deterministic input-output relationships. We show that instance-based learning provides superior RS performance, and that RS performance keeps improving with the number of observed samples, even after the CA performance has converged to its maximum. This is a crucial result, as it illustrates that RS can capture the performance of ML algorithms on nDMOC problems while CA cannot.
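    To see why CA misleads here, consider a deliberately simplified relevance metric that credits any output acceptable in the given context. This is not the paper's RS definition — just a toy illustration of the nDMOC evaluation problem, with made-up smart-lighting labels.

```python
# Why plain classification accuracy misleads for non-deterministic
# multiple-output problems: several outputs can be acceptable per input.

def classification_accuracy(preds, recorded):
    """Credit only the single recorded output."""
    return sum(p == r for p, r in zip(preds, recorded)) / len(preds)

def relevance(preds, acceptable_sets):
    """Credit any output that is acceptable in the given context (toy metric)."""
    return sum(p in ok for p, ok in zip(preds, acceptable_sets)) / len(preds)

recorded   = ["dim", "bright", "off"]                          # logged outputs
acceptable = [{"dim", "medium"}, {"bright"}, {"off", "dim"}]   # per-context sets
preds      = ["medium", "bright", "dim"]

# CA penalizes "medium" and "dim" even though both are acceptable choices.
```

    Every prediction above is acceptable in its context, yet CA scores only one of the three as correct.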

    Understanding IoT systems: a life cycle approach

    Internet of Things (IoT) systems and the corresponding network architectures are complex, because distributed services on many IoT devices collaboratively fulfill the common goals of IoT applications. System requirements for the different types of IoT application domains are still not well established. The life cycle view is one of the views used for system architecting; it shows different stakeholders' concerns at every stage of the life cycle in order to derive system requirements. We employ the life cycle view to understand IoT systems in different IoT application domains. Our contribution is the definition of a generic life cycle model for IoT, derived from observations of the life cycles of existing IoT solutions and generalized to take important IoT functionalities and quality attributes into account. No such generic life cycle model for IoT existed before.

    Federated Self-Training for Data-Efficient Audio Recognition

    Federated learning is a distributed machine learning paradigm dealing with decentralized and personal datasets. Since data reside on devices like smartphones, labeling is entrusted to the clients, or labels are extracted in an automated way. Specifically, in the case of audio data, acquiring semantic annotations can be prohibitively expensive and time-consuming. As a result, an abundance of audio data remains unlabeled and unexploited on users’ devices. Existing federated learning approaches largely focus on supervised learning without harnessing the unlabeled data. Here, we study the problem of semi-supervised learning of audio models in conjunction with federated learning. We propose FedSTAR, a self-training approach that exploits large-scale on-device unlabeled data to improve the generalization of audio recognition models. We conduct experiments on diverse public audio classification datasets, investigate the performance of our models under varying percentages of labeled data, and show that with as little as 3% labeled data, FedSTAR on average improves the recognition rate by 13.28% compared to the fully supervised federated model.
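    The on-device self-training step amounts to confidence-thresholded pseudo-labeling, which can be sketched as below. The threshold, shapes, and function name are illustrative assumptions; FedSTAR combines this step with federated aggregation across clients.

```python
# A model's confident predictions on unlabeled audio become pseudo-labels
# for further local training; uncertain predictions are discarded.
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Keep only predictions whose max class probability clears the bar."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    labels = probs.argmax(axis=1)
    return np.flatnonzero(keep), labels[keep]

probs = np.array([
    [0.95, 0.03, 0.02],   # confident  -> pseudo-labeled as class 0
    [0.40, 0.35, 0.25],   # uncertain  -> discarded
    [0.05, 0.92, 0.03],   # confident  -> pseudo-labeled as class 1
])
idx, labels = pseudo_label(probs)
```

    In the federated setting, each client pseudo-labels its own unlabeled audio and trains locally before the server averages the model updates.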

    Synthesizing and reconstructing missing sensory modalities in behavioral context recognition

    Detection of human activities along with the associated context is of key importance for various application areas, including assisted living and well-being. To predict a user’s context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced and noisy, with missing values. The model is also likely to encounter missing sensors in real-life conditions (such as a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training is missing. In this paper, we propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in the wild. We develop a fully-connected classification network by extending the encoder and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional artificial data generation, with visual and quantitative analysis on a context classification task, demonstrating the strong generative power of adversarial autoencoders.
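    The imputation setting — reconstruct full multimodal features when one modality drops out — can be illustrated with a deliberately simple linear stand-in for the adversarial autoencoder. Everything here (synthetic data, dimensions, ridge solution) is an assumption for demonstration only.

```python
# Linear stand-in for missing-modality imputation: learn a map from feature
# vectors with one modality zeroed out back to the full vectors.
import numpy as np

rng = np.random.default_rng(1)
n, d_acc, d_watch = 500, 3, 2
acc = rng.normal(size=(n, d_acc))                 # phone accelerometer features
watch = acc[:, :d_watch] * 0.5 + rng.normal(scale=0.1, size=(n, d_watch))
X = np.hstack([acc, watch])                       # full multimodal features

X_masked = X.copy()
X_masked[:, d_acc:] = 0.0                         # simulate a missing smartwatch

# Closed-form ridge regression from masked to full features (the paper uses
# a deep adversarial autoencoder instead of this linear map).
lam = 1e-3
W = np.linalg.solve(X_masked.T @ X_masked + lam * np.eye(X.shape[1]),
                    X_masked.T @ X)
X_filled = X_masked @ W
err = np.mean((X_filled[:, d_acc:] - X[:, d_acc:]) ** 2)
```

    Because the watch features correlate with the accelerometer, the reconstruction error stays near the noise floor; an adversarial autoencoder plays the analogous role for nonlinear, high-dimensional sensor data.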

    Designing IoT systems: patterns and managerial conflicts

    The first step in a system design process is to perform domain analysis. This entails acquiring stakeholder concerns throughout the life cycle of the system. The second step is to design solutions addressing those stakeholder concerns. This entails applying patterns that solve known, recurring problems: architecture patterns for architecture design and design patterns for detailed design. For Internet of Things (IoT) systems, such patterns are hardly defined yet, since experience is still evolving. In this paper, we propose our definition of an IoT pattern along with its formal specification, explained by a running example. IoT systems are characterized by the variety of stakeholders involved throughout their life cycle; therefore, our pattern specification includes means for identifying possible conflicts between these stakeholders.